Context Semantics, Linear Logic and Computational Complexity
We show that context semantics can be fruitfully applied to the quantitative
analysis of proof normalization in linear logic. In particular, context
semantics lets us define the weight of a proof-net as a measure of its inherent
complexity: it is both an upper bound on normalization time (modulo a
polynomial overhead, independently of the reduction strategy) and a lower bound
on the number of steps to normal form (for certain reduction strategies).
Weights are then exploited in proving strong soundness theorems for various
subsystems of linear logic, namely elementary linear logic, soft linear logic
and light linear logic.
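To make the two bounds explicit, one possible reading is the following (the notation is ours, not the paper's: $W_\pi$ for the weight of a proof-net $\pi$, $|\pi|$ for its size, and $T_s(\pi)$ for the number of steps strategy $s$ takes to normalize $\pi$):

```latex
% Hedged reading of the two bounds; p is some fixed polynomial.
T_s(\pi) \;\le\; p\bigl(W_\pi + |\pi|\bigr) \quad\text{for every reduction strategy } s,
\qquad
W_\pi \;\le\; T_s(\pi) \quad\text{for certain reduction strategies } s.
```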
Infinitary λ-Calculi from a Linear Perspective (Long Version)
We introduce a linear infinitary λ-calculus in which two exponential
modalities are available, the first one being the usual, finitary one, the
other being the only construct interpreted coinductively. The obtained calculus
embeds the infinitary applicative λ-calculus and is universal for computations
over infinite strings. What is particularly interesting about this calculus is
that the refinement induced by linear logic makes it possible to restrict both
modalities so as to obtain calculi which are terminating inductively and
productive coinductively. We exemplify this idea by analysing a fragment of the
calculus built around two such restricted principles. Interestingly, it enjoys
confluence, contrary to what happens in ordinary infinitary λ-calculi.
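The calculus itself is proof-theoretic, but the behavioural property it targets can be illustrated by a minimal Haskell sketch (our example, not the calculus of the paper): a function on infinite strings is productive when every piece of output arrives after finitely many steps, which guarded corecursion guarantees.

```haskell
-- A minimal illustration of productive coinduction over infinite
-- strings (streams); not the calculus of the paper, only the
-- behaviour its coinductive modality is meant to capture.
data Stream a = Cons a (Stream a)

-- Productive: the recursive call is guarded by a constructor, so
-- each output element is available after finitely many steps.
mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Cons x xs) = Cons (f x) (mapS f xs)

-- An infinite binary string, alternating 0 and 1.
bits :: Stream Int
bits = Cons 0 (Cons 1 bits)

-- Observe a finite prefix: takeS 6 (mapS (1 -) bits) == [1,0,1,0,1,0].
takeS :: Int -> Stream a -> [a]
takeS n (Cons x xs)
  | n <= 0    = []
  | otherwise = x : takeS (n - 1) xs
```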
An Invariant Cost Model for the Lambda Calculus
We define a new cost model for the call-by-value lambda-calculus satisfying
the invariance thesis. That is, under the proposed cost model, Turing machines
and the call-by-value lambda-calculus can simulate each other within a
polynomial time overhead. The model only relies on combinatorial properties of
usual beta-reduction, without any reference to a specific machine or evaluator.
In particular, the cost of a single beta reduction is proportional to the
difference between the size of the redex and the size of the reduct. In this
way, the total cost of normalizing a lambda term will take into account the
size of all intermediate results (as well as the number of steps to normal
form).
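A hedged Haskell sketch of the idea (the datatype and the exact formula below are ours; the paper's definition may differ in detail): each beta step is charged in proportion to how much the term grows, so large intermediate results are paid for.

```haskell
-- Illustrative cost of a single beta step, proportional to the size
-- difference between reduct and redex (and at least 1). A sketch of
-- the idea only, not the paper's exact definition.
data Term = Var String | Lam String Term | App Term Term

size :: Term -> Int
size (Var _)   = 1
size (Lam _ t) = 1 + size t
size (App t u) = 1 + size t + size u

stepCost :: Term -> Term -> Int
stepCost redex reduct = max 1 (size reduct - size redex)
```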
On Constructor Rewrite Systems and the Lambda Calculus
We prove that orthogonal constructor term rewrite systems and lambda-calculus
with weak (i.e., no reduction is allowed under the scope of a
lambda-abstraction) call-by-value reduction can simulate each other with a
linear overhead. In particular, weak call-by-value beta-reduction can be
simulated by an orthogonal constructor term rewrite system in the same number
of reduction steps. Conversely, each reduction in a term rewrite system can be
simulated by a constant number of beta-reduction steps. This is relevant to
implicit computational complexity, because the number of beta steps to normal
form is polynomially related to the actual cost (that is, as performed on a
Turing machine) of normalization, under weak call-by-value reduction.
Orthogonal constructor term rewrite systems and lambda-calculus are thus both
polynomially related to Turing machines, taking their natural parameters as the
notion of cost.
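For readers coming from the lambda-calculus side, the following Haskell definition is a minimal example of an orthogonal constructor rewrite system (an illustration of the notion, not of the simulation itself): left-hand sides are patterns built from constructors only, and no two rules overlap.

```haskell
-- Addition on unary numerals as an orthogonal constructor rewrite
-- system, written as Haskell pattern-matching equations.
data Nat = Z | S Nat

add :: Nat -> Nat -> Nat
add Z     m = m            -- add(Z, m)   -> m
add (S n) m = S (add n m)  -- add(S n, m) -> S(add(n, m))
```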
On Sharing, Memoization, and Polynomial Time (Long Version)
We study how the adoption of an evaluation mechanism with sharing and
memoization impacts the class of functions which can be computed in polynomial
time. We first show that a natural cost model, in which looking up an already
computed value has no cost, is indeed invariant. As a corollary, we then prove
that the most general notion of ramified recurrence is sound for polynomial
time, thereby settling an open problem in implicit computational complexity.
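A small Haskell illustration of the kind of evaluation at stake (our example, not the paper's evaluation mechanism): with sharing, each value is computed once and later uses amount to cheap lookups.

```haskell
-- Fibonacci with sharing: the list fibs is computed once and each
-- element is reused, so fib n needs O(n) additions plus lookups,
-- whereas the naive recursion takes exponentially many steps.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib :: Int -> Integer
fib n = fibs !! n
```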
(Leftmost-Outermost) Beta Reduction is Invariant, Indeed
Slot and van Emde Boas' weak invariance thesis states that reasonable
machines can simulate each other within a polynomial overhead in time. Is
lambda-calculus a reasonable machine? Is there a way to measure the
computational complexity of a lambda-term? This paper presents the first
complete positive answer to this long-standing problem. Moreover, our answer is
completely machine-independent and based on a standard notion in the theory
of lambda-calculus: the length of a leftmost-outermost derivation to normal
form is an invariant cost model. Such a theorem cannot be proved by directly
relating lambda-calculus with Turing machines or random access machines,
because of the size explosion problem: there are terms that in a linear number
of steps produce an exponentially long output. The first step towards the
solution is to shift to a notion of evaluation for which the length and the
size of the output are linearly related. This is done by adopting the linear
substitution calculus (LSC), a calculus of explicit substitutions modeled after
linear logic proof nets and admitting a decomposition of leftmost-outermost
derivations with the desired property. Thus, the LSC is invariant with respect
to, say, random access machines. The second step is to show that LSC is
invariant with respect to the lambda-calculus. The size explosion problem seems
to imply that this is not possible: having the same notions of normal form,
evaluation in the LSC is exponentially longer than in the lambda-calculus. We
solve such an impasse by introducing a new form of shared normal form and
shared reduction, deemed useful. Useful evaluation avoids those steps that only
unshare the output without contributing to beta-redexes, i.e. the steps that
cause the blow-up in size. The main technical contribution of the paper is
indeed the definition of useful reductions and the thorough analysis of their
properties.
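To make the size explosion problem concrete, a standard size-exploding family (illustrative, and not necessarily the exact family used in the paper) is:

```latex
t_1 \;:=\; \lambda x.\, x\,x
\qquad\qquad
t_{n+1} \;:=\; \lambda x.\, t_n\,(x\,x)
```

For a variable y, the term t_n y has size O(n), yet leftmost-outermost evaluation normalizes it in exactly n steps, producing the complete binary tree of applications with 2^n occurrences of y: a normal form of size exponential in n.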